Crowdsourcing a HIT: Measuring Workers' Pre-Task Interactions on Microtask Markets
Authors
Abstract
The ability to entice and engage crowd workers to participate in human intelligence tasks (HITs) is critical for many human computation systems and large-scale experiments. While various metrics have been devised to measure and improve the quality of worker output via task design, effective recruitment of crowd workers is often overlooked. To gain a better understanding of crowd recruitment strategies, we propose three new metrics for measuring crowd workers' willingness to participate in advertised HITs: conversion rate, conversion rate over time, and nominal conversion rate. We discuss how the conversion rate of workers (the proportion of potential workers aware of a task who choose to accept it) can affect the quantity, quality, and validity of any data collected via crowdsourcing. We also contribute a tool, turkmill, that enables requesters on Amazon Mechanical Turk to easily measure the conversion rate of HITs. We then present the results of two experiments that demonstrate how conversion rate metrics can be used to evaluate the effect of different HIT designs. We investigate how four HIT design features (value proposition, branding, quality of presentation, and intrinsic motivation) affect conversion rates. Among other things, we find that including a clear value proposition has a strong, significant, and positive effect on the nominal conversion rate. We also find that crowd workers prefer commercial entities to non-profit or university requesters.
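The basic intuition behind the conversion rate metric can be sketched as a simple ratio of accepting workers to aware workers. The function below is illustrative only: the counter names and the choice of HIT previews as the denominator are assumptions for the sketch, not the paper's formal definitions of the three metrics.

```python
def conversion_rate(previews: int, accepts: int) -> float:
    """Fraction of workers who previewed a HIT and went on to accept it.

    `previews` stands in for "workers aware of the task"; this proxy is
    an assumption for illustration, not the paper's exact definition.
    """
    if previews == 0:
        return 0.0
    return accepts / previews

# e.g. 300 workers previewed the HIT and 120 accepted it
rate = conversion_rate(300, 120)
print(f"{rate:.0%}")  # -> 40%
```

A low value on such a ratio would signal that a HIT's advertisement (title, reward, description) is failing to convert aware workers into participants, independent of the quality of the work they eventually submit.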
Similar Resources
Inter-Task Effects Induce Bias in Crowdsourcing
Microtask platforms allow researchers to engage participants quickly and inexpensively. Workers on such platforms typically perform many tasks in succession, so we investigate interactions between earlier tasks and later ones, which we call inter-task effects. Existing research investigates the effect of many task design factors, such as framing, on the quality of responses, but to our knowledge does not add...
It's Getting Crowded!: Improving the Effectiveness of Microtask Crowdsourcing
Microtask crowdsourcing has emerged as an excellent means to acquire human input on demand, and has found widespread application in solving a variety of problems. Popular examples include surveys, content creation, acquisition of image annotations, etc. However, there are a number of challenges that need to be overcome to realize the true potential of this paradigm. With an aim to improve the ef...
Using Worker Quality Scores to Improve Stopping Rules
We consider the crowdsourcing task of learning the answer to simple multiple-choice microtasks. In order to provide statistically significant results, one often needs to ask multiple workers to answer the same microtask. A stopping rule is an algorithm that for a given microtask decides for any given set of worker answers if the system should stop and output an answer or iterate and ask one mor...
Worker Viewpoints: Valuable Feedback for Microtask Designers in Crowdsourcing
One of the problems a requester faces when crowdsourcing a microtask is that, due to an underspecified or ambiguous task description, workers may misinterpret the microtask at hand. We call a set of such interpretations worker viewpoints. In this paper, we argue that assisting requesters in gathering a worker's interpretation of the microtask can help in providing useful feedback to designers, who...
Exploring Microtask Crowdsourcing as a Means of Fault Localization
Microtask crowdsourcing is the practice of breaking down an overarching task to be performed into numerous, small, and quick microtasks that are distributed to an unknown, large set of workers. Microtask crowdsourcing has shown potential in other disciplines, but with only a handful of approaches explored to date in software engineering, its potential in our field remains unclear. In this paper...
Journal:
Volume Issue
Pages -
Publication date: 2013